KPCA-based training of a kernel fuzzy classifier with ellipsoidal regions
Authors
Abstract
In a fuzzy classifier with ellipsoidal regions, a fuzzy rule based on the Mahalanobis distance is defined for each class, and the rules are then tuned so that the recognition rate for the training data is maximized. In most cases, one fuzzy rule per class is enough to obtain high generalization ability, but in some cases the class data must be partitioned to define more than one rule per class. In this paper, instead of partitioning the class data, we map the input space into a high-dimensional feature space and generate a fuzzy classifier with ellipsoidal regions there; we call this a kernel fuzzy classifier with ellipsoidal regions. To speed up training, we first select independent training data that span the subspace in the feature space and calculate the kernel principal components from them. This avoids singular value decomposition and thus speeds up training. In the feature space, the training data are usually degenerate; that is, the space spanned by the mapped training data is a proper subspace. Hence, if mapped test data lie in the complementary subspace, the Mahalanobis distance may become erroneous and the probability of misclassification high. To overcome this problem, we propose transductive training: during training we add the basis vectors of the input space as unlabelled data, and during classification, if mapped unknown data are not in the subspace, we expand the subspace so that they are included. We demonstrate the effectiveness of our method by computer simulations.
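The core idea of the abstract, one Mahalanobis-distance (ellipsoidal) rule per class computed in a KPCA feature space, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the RBF kernel, the `KernelEllipsoidSketch` class, and all parameter choices (`gamma`, `n_components`, the covariance regularizer) are assumptions, and the paper's speed-up via independent-data selection and its transductive subspace expansion are omitted.

```python
import numpy as np

def rbf(X, Y, gamma):
    # RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelEllipsoidSketch:
    """Sketch: KPCA projection, then one Mahalanobis rule per class."""

    def __init__(self, gamma=0.5, n_components=2, reg=1e-6):
        self.gamma, self.k, self.reg = gamma, n_components, reg

    def fit(self, X, y):
        self.X, n = X, len(X)
        K = rbf(X, X, self.gamma)
        J = np.eye(n) - 1.0 / n                  # centring matrix I - (1/n) 1 1^T
        lam, A = np.linalg.eigh(J @ K @ J)       # eigenvalues in ascending order
        lam, A = lam[-self.k:], A[:, -self.k:]   # keep top-k kernel principal axes
        self.A = A / np.sqrt(lam)                # normalise feature-space axes
        self.col_mean, self.total = K.mean(axis=0), K.mean()
        Z = self._project(X)
        self.rules = {}                          # one ellipsoidal rule per class
        for c in np.unique(y):
            Zc = Z[y == c]
            mu = Zc.mean(axis=0)
            # Regularize the covariance so its inverse exists (assumed choice).
            cov = np.cov(Zc, rowvar=False) + self.reg * np.eye(self.k)
            self.rules[c] = (mu, np.linalg.inv(cov))
        return self

    def _project(self, Xnew):
        # Centre the test kernel consistently with the training Gram matrix.
        Kx = rbf(Xnew, self.X, self.gamma)
        Kx = Kx - Kx.mean(axis=1, keepdims=True) - self.col_mean + self.total
        return Kx @ self.A

    def predict(self, Xnew):
        # Assign each point to the class with the smallest Mahalanobis distance.
        Z = self._project(Xnew)
        classes = sorted(self.rules)
        d = np.stack([np.einsum('ij,jk,ik->i', Z - mu, P, Z - mu)
                      for mu, P in (self.rules[c] for c in classes)], axis=1)
        return np.array(classes)[d.argmin(axis=1)]
```

On two well-separated toy clusters, this recovers the training labels almost perfectly; the paper's contribution is making this tractable and reliable (avoiding SVD, handling test points outside the spanned subspace), which the sketch does not address.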
Similar resources
SUBCLASS FUZZY-SVM CLASSIFIER AS AN EFFICIENT METHOD TO ENHANCE THE MASS DETECTION IN MAMMOGRAMS
This paper is concerned with the development of a novel classifier for automatic mass detection of mammograms, based on contourlet feature extraction in conjunction with statistical and fuzzy classifiers. In this method, mammograms are segmented into regions of interest (ROI) in order to extract features including geometrical and contourlet coefficients. The extracted features benefit from...
Tuning membership functions of kernel fuzzy classifiers by maximizing margins
We propose two methods for tuning membership functions of a kernel fuzzy classifier based on the idea of SVM (support vector machine) training. We assume that in a kernel fuzzy classifier a fuzzy rule is defined for each class in the feature space. In the first method, we tune the slopes of the membership functions at the same time so that the margin between classes is maximized under the const...
Semi-supervised training of a Kernel PCA-Based Model for Word Sense Disambiguation
In this paper, we introduce a new semi-supervised learning model for word sense disambiguation based on Kernel Principal Component Analysis (KPCA), with experiments showing that it can further improve accuracy over supervised KPCA models that have achieved WSD accuracy superior to the best published individual models. Although empirical results with supervised KPCA models demonstrate significan...
Analog Circuit Intelligent Fault Diagnosis Based on Greedy KPCA and One-against-rest SVM Approach
Fault diagnosis of analog circuits is essential for guaranteeing the reliability and maintainability of electronic systems. A novel analog circuit fault diagnosis approach based on greedy kernel principal component analysis (KPCA) and one-against-rest support vector machine (OARSVM) is proposed in this paper. In order to obtain a successful fault classifier, eliminating noise and extracting fau...
A New Feature Extraction Method Based on the Information Fusion of Entropy Matrix and Covariance Matrix and Its Application in Face Recognition
The classic principal components analysis (PCA), kernel PCA (KPCA) and linear discriminant analysis (LDA) feature extraction methods evaluate the importance of components according to their covariance contribution, not considering the entropy contribution, which is important supplementary information for the covariance. To further improve the covariance-based methods such as PCA (or KPCA), this...
Journal: Int. J. Approx. Reasoning
Volume 37, Issue -
Pages -
Publication year: 2004